Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

Abstract

This is the supplementary file of the paper "Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo". In Appendix A, we provide deferred proofs of the results in the paper. In Appendix B, we describe the statistical analysis of OPS with general $\epsilon$. In Appendix C, we discuss a differentially private extension of Stochastic Gradient Fisher Scoring (SGFS). The subsequent appendices cover a qualitative experiment, additional discussion of the proposed methods, and relationships to existing work.

A. Proofs

Proof of Theorem 1. The posterior distribution is

$$p(\theta \mid x_1, \ldots, x_n) = \frac{\prod_{i=1}^{n} p(x_i \mid \theta)\, p(\theta)}{\int_{\theta} \prod_{i=1}^{n} p(x_i \mid \theta)\, p(\theta)\, d\theta}.$$

For any $x_1, \ldots, x_n, x'$ ...
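The proof text above cuts off where the neighboring dataset is introduced. As an editorial sketch (not the paper's verbatim argument), the standard ratio bound behind such results, assuming the uniform boundedness condition $\sup_{x,\theta} |\log p(x \mid \theta)| \le B$ and a neighboring dataset that replaces $x_n$ with $x'_n$, reads:

$$\frac{p(\theta \mid x_1, \ldots, x_n)}{p(\theta \mid x_1, \ldots, x_{n-1}, x'_n)} = \frac{p(x_n \mid \theta)}{p(x'_n \mid \theta)} \cdot \frac{\int_{\theta} \prod_{i=1}^{n-1} p(x_i \mid \theta)\, p(x'_n \mid \theta)\, p(\theta)\, d\theta}{\int_{\theta} \prod_{i=1}^{n-1} p(x_i \mid \theta)\, p(x_n \mid \theta)\, p(\theta)\, d\theta} \le e^{2B} \cdot e^{2B} = e^{4B}.$$

Each of the two factors is bounded by $e^{2B}$ because every likelihood value lies in $[e^{-B}, e^{B}]$, so releasing one posterior sample satisfies $4B$-differential privacy.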


Similar Papers

Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one single sample from a posterior distribution ...
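Since the abstract is truncated, here is a minimal Python sketch of the one-posterior-sample idea it describes: release a single draw from the posterior. The modeling choices (a Beta-Bernoulli model with a uniform prior, and the truncation level delta) are illustrative assumptions, not the paper's setup; restricting theta to [delta, 1 - delta] keeps |log p(x | theta)| bounded by B = log(1/delta), the kind of boundedness condition the privacy guarantee needs.

import numpy as np

# Illustrative one-posterior-sample release for a truncated
# Beta-Bernoulli model (hypothetical setup, not the paper's code).
rng = np.random.default_rng(0)
delta = 0.05                       # truncation level, so B = log(1/delta)
x = rng.integers(0, 2, size=100)   # stand-in for a sensitive binary dataset

# Under a uniform prior, the posterior is Beta(1 + sum(x), 1 + n - sum(x)),
# truncated to [delta, 1 - delta]; sample it by simple rejection.
a, b = 1 + x.sum(), 1 + len(x) - x.sum()
while True:
    theta = rng.beta(a, b)
    if delta <= theta <= 1 - delta:
        break

print(theta)  # the single released sample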


On Connecting Stochastic Gradient MCMC and Differential Privacy

Significant success has been realized recently in applying machine learning to real-world applications. There have also been corresponding concerns about the privacy of training data, which relates to data security and confidentiality issues. Differential privacy provides a principled and rigorous privacy guarantee for machine learning models. While it is common to design a model satisfying a requi...


Stochastic Gradient Monomial Gamma Sampler

Recent advances in stochastic gradient techniques have made it possible to estimate posterior distributions from large datasets via Markov Chain Monte Carlo (MCMC). However, when the target posterior is multimodal, mixing performance is often poor. This results in inadequate exploration of the posterior distribution. A framework is proposed to improve the sampling efficiency of stochastic gradi...


CPSG-MCMC: Clustering-Based Preprocessing method for Stochastic Gradient MCMC

In recent years, stochastic gradient Markov Chain Monte Carlo (SG-MCMC) methods have been developed to process large-scale datasets by iterative learning from small minibatches. However, the high variance caused by naive subsampling usually slows down convergence to the desired posterior distribution. In this paper, we propose an effective subsampling strategy to reduce the variance based on a ...


Bayesian Learning via Stochastic Gradient Langevin Dynamics

In this paper we propose a new framework for learning from large-scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm, we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesi...
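As a concrete illustration of the update this abstract describes, here is a minimal SGLD sketch for a toy Gaussian mean model. The dataset, prior, and polynomially annealed step size are assumptions chosen for the example, not the paper's experimental setup.

import numpy as np

# Stochastic Gradient Langevin Dynamics on a toy model:
# likelihood x_i ~ N(theta, 1), prior theta ~ N(0, 10).
rng = np.random.default_rng(0)
n, batch = 10_000, 100
data = rng.normal(2.0, 1.0, size=n)  # synthetic "large" dataset

theta = 0.0
samples = []
for t in range(1, 5001):
    eps = 1e-5 * (1 + t / 1000) ** -0.55          # annealed step size
    idx = rng.choice(n, size=batch, replace=False)
    # Stochastic gradient of the log posterior: prior term plus the
    # minibatch likelihood term rescaled by n / batch.
    grad = -theta / 10.0 + (n / batch) * np.sum(data[idx] - theta)
    # Langevin update: half-step along the gradient plus N(0, eps) noise.
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

# After burn-in the iterates behave like posterior samples; the mean
# should be close to the data mean of about 2.0.
print(np.mean(samples[1000:]))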




Publication date: 2015